Learning Strategy-Aware Linear Classifiers

Neural Information Processing Systems

We address the question of repeatedly learning linear classifiers against agents who are \emph{strategically} trying to \emph{game} the deployed classifiers, and we use the \emph{Stackelberg regret} to measure the performance of our algorithms. First, we show that Stackelberg and external regret for the problem of strategic classification are \emph{strongly incompatible}: i.e., there exist worst-case scenarios, where \emph{any} sequence of actions providing \emph{sublinear} external regret might result in \emph{linear} Stackelberg regret and vice versa. Second, we present a strategy-aware algorithm for minimizing the Stackelberg regret for which we prove nearly matching upper and lower regret bounds. Finally, we provide simulations to complement our theoretical analysis. Our results advance the growing literature of learning from revealed preferences, which has so far focused on ``smoother'' assumptions from the perspective of the learner and the agents respectively.
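To make the setting concrete, here is a minimal toy sketch (not the paper's algorithm) of an agent "gaming" a deployed linear classifier: given the classifier $\mathrm{sign}(w \cdot x - b)$, the agent moves its reported features just enough to cross the decision boundary, provided the move is cheaper than some budget. The cost model, the budget, and all names here are illustrative assumptions.

```python
import numpy as np

def best_response(x, w, b, budget=1.0):
    """Agent's best response to the deployed classifier sign(w.x - b).

    Illustrative sketch only: the agent pays an L2 cost to move its
    reported features and games the classifier when that is cheap enough.
    """
    x = np.asarray(x, dtype=float)
    w = np.asarray(w, dtype=float)
    margin = b - w @ x           # score deficit relative to the boundary
    if margin <= 0:              # already classified positive: no move
        return x
    shift = margin / (w @ w)     # minimal move along w onto the boundary
    cost = shift * np.linalg.norm(w)
    if cost <= budget:           # gaming is worthwhile: misreport
        return x + shift * w
    return x                     # too expensive: report truthfully

# A truthful agent at [0.2, 0.1] facing the classifier x_1 >= 0.5
# moves horizontally to the boundary to obtain a positive label.
z = best_response(np.array([0.2, 0.1]), np.array([1.0, 0.0]), b=0.5)
print(z)
```

A strategy-unaware learner trained on such reported points $z$ sees a shifted distribution, which is why the paper measures performance against the agents' best responses (Stackelberg regret) rather than against the raw reports.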


Review for NeurIPS paper: Learning Strategy-Aware Linear Classifiers

Neural Information Processing Systems

Additional Feedback: Overall, I think that this paper meets the NeurIPS standard and will be interesting to the community, so I recommend accepting it. However, I have a few questions I wish the authors could answer: • The issue with $\hat{y}$ is puzzling. Almost all prior work on strategic classification assumes that the agents can only misreport their feature vectors, while the labels are the agents' inherent property. In this paper, as a byproduct of defining $h^*$, agents can hypothetically change their label too. However, the semi-formal discussion in lines 96-100 forbids this. Formally, this is equivalent to restricting the agent (the optimization problem in line 92) to pick $z$ with $h^*(z) = y_t$.


Review for NeurIPS paper: Learning Strategy-Aware Linear Classifiers

Neural Information Processing Systems

The author response addressed all major concerns of the reviewers. The paper makes an interesting and important contribution to strategic classification, and we are happy to recommend acceptance.
